Missing data is a common concern in health datasets, and its impact on good decision-making processes is well documented. Our study's contribution is a methodology for tackling missing data problems using a combination of synthetic dataset generation, missing data imputation and deep learning methods. Specifically, we conducted a series of experiments with these objectives: $a)$ generating a realistic synthetic dataset, $b)$ simulating data missingness, $c)$ recovering the missing data, and $d)$ analyzing imputation performance. Our methodology used a Gaussian mixture model, whose parameters were learned from a cleaned subset of a real demographic and health dataset, to generate the synthetic data. We simulated missingness degrees of $10\%$, $20\%$, $30\%$, and $40\%$ under the missing completely at random (MCAR) scheme. We used an integrated performance analysis framework involving clustering, classification and direct imputation analysis. Our results show that models trained on synthetic and imputed datasets could make predictions with accuracies of $83\%$ and $80\%$ on $a)$ an unseen real dataset and $b)$ an unseen reserved synthetic test dataset, respectively. Moreover, the models that used the DAE method for imputation yielded the lowest log loss, an indication of good performance, even though their accuracy measures were slightly lower. In conclusion, our work demonstrates that, using our methodology, one can reverse engineer a solution to resolve missingness on an unseen dataset with missingness. Moreover, though we used a health dataset, our methodology can be utilized in other contexts.
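As a minimal illustration of the first two steps, the sketch below fits a Gaussian mixture model to a clean data subset, samples a synthetic dataset from it, and applies an MCAR mask at each missingness degree. The stand-in data, variable names, and use of scikit-learn are our own assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Stand-in for a cleaned subset of a real dataset (hypothetical data).
X_clean = rng.normal(size=(1000, 5))

# Step a) learn GMM parameters from the clean subset, then sample synthetic data.
gmm = GaussianMixture(n_components=3, random_state=0).fit(X_clean)
X_synth, _ = gmm.sample(n_samples=2000)

# Step b) simulate MCAR missingness: every cell is masked independently with
# the same probability, irrespective of observed or unobserved values.
def apply_mcar(X, missing_rate, rng):
    mask = rng.random(X.shape) < missing_rate
    X_missing = X.copy()
    X_missing[mask] = np.nan
    return X_missing, mask

for rate in (0.10, 0.20, 0.30, 0.40):
    X_missing, mask = apply_mcar(X_synth, rate, rng)
    print(f"{rate:.0%} target -> {np.isnan(X_missing).mean():.1%} actual missingness")
```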
Hybrid methods have been shown to outperform pure statistical and pure deep learning methods at forecasting tasks, and to quantify the uncertainty associated with these forecasts (prediction intervals). One example is the Exponential Smoothing Recurrent Neural Network (ES-RNN), a hybrid between a statistical forecasting model and a recurrent neural network variant. The ES-RNN achieved a 9.4% improvement in absolute error in the Makridakis-4 forecasting competition. This improvement, and similar performance from hybrid models, has mostly been demonstrated only on univariate datasets. Difficulties in applying hybrid forecasting methods to multivariate data include ($i$) the high computational cost involved in hyperparameter tuning, ($ii$) challenges associated with the auto-correlation inherent in the data, and ($iii$) complex dependencies (cross-correlations) between the covariates that may be hard to capture. This paper presents the Multivariate Exponential Smoothing Long Short-Term Memory (MES-LSTM) model, a generalized multivariate extension of the ES-RNN that overcomes these challenges. MES-LSTM utilizes a vectorized implementation. We test MES-LSTM on several aggregated coronavirus disease of 2019 (COVID-19) morbidity datasets and find that our hybrid approach shows consistent, significant improvement over pure statistical and deep learning methods in forecast accuracy and prediction interval construction.
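A stripped-down sketch of the hybrid idea (not the authors' MES-LSTM code): simple exponential smoothing supplies a per-series level, the series is normalized by that level, and an LSTM models the remaining structure. The smoothing parameter, network sizes, and synthetic data are illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

def ses_level(y, alpha=0.3):
    """Simple exponential smoothing: running level estimate per time step."""
    level = np.empty_like(y)
    level[0] = y[0]
    for t in range(1, len(y)):
        level[t] = alpha * y[t] + (1 - alpha) * level[t - 1]
    return level

# Hypothetical multivariate series of shape (time, variables).
rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=(200, 3)), axis=0) + 50.0

# Statistical component: normalize each variable by its smoothed level.
levels = np.stack([ses_level(y[:, j]) for j in range(y.shape[1])], axis=1)
normalized = y / levels  # the level-free series the LSTM will model

class ResidualLSTM(nn.Module):
    """Deep learning component: forecasts the normalized series one step ahead."""
    def __init__(self, n_vars, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_vars, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_vars)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1])

model = ResidualLSTM(n_vars=3)
window = torch.tensor(normalized[-24:], dtype=torch.float32).unsqueeze(0)
pred_norm = model(window)  # untrained forward pass, for illustration only

# Final hybrid forecast: LSTM output re-scaled by the last statistical level.
forecast = pred_norm.detach().numpy()[0] * levels[-1]
print(forecast)
```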
We investigate ensemble techniques in forecasting and examine their potential for use with non-seasonal time series similar to those seen in the early stages of the COVID-19 pandemic. Developing improved forecasting methods is essential, as they provide data-driven decision support for organizations and decision-makers during critical phases. We propose the use of late data fusion, using a stacked ensemble of two forecasting models and two meta-features, and demonstrate its predictive power in a preliminary forecasting stage. The final ensemble includes Prophet and a long short-term memory (LSTM) neural network as base models. The base models are combined by a multilayer perceptron (MLP), taking into account meta-features that show the highest correlation with each base model's forecast accuracy. We further show that including the meta-features generally improves the ensemble's forecast accuracy at two forecast horizons of seven and fourteen days. This research reinforces previous work and demonstrates the value of combining traditional statistical models with deep learning models to produce more accurate forecasting models for time series from various domains and seasonalities.
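The stacked, late-fusion setup can be sketched as follows: out-of-sample base-model forecasts (here random stand-ins for Prophet and LSTM outputs) are concatenated with meta-features and fed to an MLP meta-learner. The use of scikit-learn's MLPRegressor and the two meta-features shown are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 300

# Stand-ins for out-of-sample base-model forecasts on a validation window
# (in the paper these would come from Prophet and an LSTM).
truth = np.sin(np.linspace(0, 12, n)) + rng.normal(0, 0.1, n)
prophet_pred = truth + rng.normal(0, 0.2, n)
lstm_pred = truth + rng.normal(0, 0.15, n)

# Hypothetical meta-features describing each window,
# e.g. recent volatility and trend strength.
volatility = np.abs(rng.normal(0, 0.2, n))
trend = rng.normal(0, 0.1, n)

# Late data fusion: stack base forecasts together with meta-features.
X_meta = np.column_stack([prophet_pred, lstm_pred, volatility, trend])

meta_learner = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                            random_state=0)
meta_learner.fit(X_meta[:200], truth[:200])   # fit on the validation portion
fused = meta_learner.predict(X_meta[200:])    # fused final forecast
print(np.mean((fused - truth[200:]) ** 2))    # holdout mean squared error
```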
Hybridization and ensemble learning are popular model fusion techniques for improving the predictive power of forecasting methods. With limited research combining these two promising approaches, this paper focuses on the utility of the Exponential Smoothing Recurrent Neural Network (ES-RNN) in the pool of base models for different ensembles. We compare against several state-of-the-art ensembling techniques, with arithmetic model averaging as a benchmark. We experiment on the M4 forecasting dataset of 100,000 time series, and the results show that Feature-Based Forecast Model Averaging (FFORMA) is, on average, the best technique for late data fusion with the ES-RNN. However, on the M4 daily data subset, stacking was the only ensemble that succeeded in the case where all base models performed similarly. Our experimental results show that we achieve state-of-the-art forecasting results compared with N-BEATS as a benchmark. We conclude that model averaging is more robust than model selection and stacking strategies. Further, the results suggest that gradient boosting is superior for implementing ensemble learning strategies.
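FFORMA-style fusion can be approximated as follows (a simplification of the actual FFORMA objective): a gradient boosting classifier is trained on series features to predict which base model performs best per series, and its class probabilities are reused as averaging weights over the base-model forecasts. The feature choices and the probabilities-as-weights shortcut are our assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n_series, n_models = 500, 3  # e.g. ES-RNN, ETS, and a naive baseline

# Hypothetical per-series features (FFORMA uses time series features).
features = rng.normal(size=(n_series, 6))

# Hypothetical base-model forecasts and the index of the best model
# per series, as measured on a holdout window.
forecasts = rng.normal(size=(n_series, n_models))
best_model = rng.integers(0, n_models, size=n_series)

# Gradient boosting learns the mapping features -> best base model.
gbm = GradientBoostingClassifier(random_state=0)
gbm.fit(features[:400], best_model[:400])

# Model averaging: class probabilities serve as per-series combination weights.
weights = gbm.predict_proba(features[400:])           # rows sum to 1
combined = np.sum(weights * forecasts[400:], axis=1)  # weighted forecast average
print(combined[:5])
```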
The ability to convert reciprocating, i.e., alternating, actuation into rotary motion using linkages is hindered fundamentally by their poor torque transmission capability around kinematic singularity configurations. Here, we harness the elastic potential energy of a linear spring attached to the coupler link of four-bar mechanisms to manipulate force transmission around the kinematic singularities. We developed a theoretical model to explore the parameter space for proper force transmission in slider-crank and rocker-crank four-bar kinematics. Finally, we verified the proposed model and methodology by building and testing a macro-scale prototype of a slider-crank mechanism. We expect this approach to enable the development of small-scale rotary engines and robotic devices with closed kinematic chains dealing with serial kinematic singularities, such as linkages and parallel manipulators.
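As a back-of-the-envelope companion to the theoretical model (our own simplification, not the authors' formulation): for a slider-crank, the slider force producible by a given crank torque scales with the transmission ratio dx/dθ, which vanishes at the dead-center singularities; a spring attached to the linkage can store elastic energy away from those configurations and release it across them. The geometry and spring constants below are illustrative.

```python
import numpy as np

# Illustrative slider-crank geometry (crank length r, coupler length l).
r, l = 1.0, 3.0

def slider_position(theta):
    """Slider displacement x(theta) for a centric slider-crank."""
    return r * np.cos(theta) + np.sqrt(l**2 - (r * np.sin(theta))**2)

theta = np.linspace(0, 2 * np.pi, 721)
x = slider_position(theta)

# Kinematic transmission ratio dx/dtheta: the crank torque needed to produce
# a given slider force is proportional to dx/dtheta, so force transmission
# degenerates where dx/dtheta -> 0, i.e. at the dead-center singularities.
dx_dtheta = np.gradient(x, theta)

# Hypothetical linear spring (stiffness k, free length x0) chosen so that its
# stored energy peaks away from the dead centers and is released across them.
k, x0 = 5.0, l + r
spring_energy = 0.5 * k * (x - x0)**2

for t, dxd, E in zip(theta[::120], dx_dtheta[::120], spring_energy[::120]):
    print(f"theta={t:5.2f} rad  dx/dtheta={dxd:+.3f}  spring E={E:.3f}")
```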
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants and only 50% of the participants performed ensembling based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
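To make the most frequently reported strategy concrete, here is a minimal sketch of patch-based training, i.e. sampling fixed-size crops from images too large to process at once. The patch size and random-crop policy are illustrative assumptions, not taken from any surveyed solution.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_patches(image, patch_size, n_patches, rng):
    """Randomly crop fixed-size 2D patches from a large image."""
    h, w = image.shape[:2]
    ph, pw = patch_size
    patches = []
    for _ in range(n_patches):
        top = rng.integers(0, h - ph + 1)
        left = rng.integers(0, w - pw + 1)
        patches.append(image[top:top + ph, left:left + pw])
    return np.stack(patches)

# Hypothetical biomedical image too large to feed to a network whole.
image = rng.random((4096, 4096))
batch = sample_patches(image, patch_size=(256, 256), n_patches=8, rng=rng)
print(batch.shape)  # (8, 256, 256): one training batch of patches
```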
Text-based personality computing (TPC) has gained substantial research interest in NLP. In this paper, we describe 15 challenges that we believe deserve the attention of the research community. These challenges are organized by the following topics: personality taxonomies, measurement quality, datasets, performance evaluation, modelling choices, as well as ethics and fairness. When addressing each challenge, we not only combine perspectives from both NLP and social sciences, but also offer concrete suggestions towards more valid and reliable TPC research.
Strategic test allocation plays a major role in the control of both emerging and existing pandemics (e.g., COVID-19, HIV). Widespread testing supports effective epidemic control by (1) reducing transmission via identifying cases, and (2) tracking outbreak dynamics to inform targeted interventions. However, infectious disease surveillance presents unique statistical challenges. For instance, the true outcome of interest - one's positive infectious status, is often a latent variable. In addition, presence of both network and temporal dependence reduces the data to a single observation. As testing entire populations regularly is neither efficient nor feasible, standard approaches to testing recommend simple rule-based testing strategies (e.g., symptom based, contact tracing), without taking into account individual risk. In this work, we study an adaptive sequential design involving n individuals over a period of $\tau$ time-steps, which allows for unspecified dependence among individuals and across time. Our causal target parameter is the mean latent outcome we would have obtained after one time-step, if, starting at time $t$ given the observed past, we had carried out a stochastic intervention that maximizes the outcome under a resource constraint. We propose an Online Super Learner for adaptive sequential surveillance that learns the optimal choice of tests strategies over time while adapting to the current state of the outbreak. Relying on a series of working models, the proposed method learns across samples, through time, or both: based on the underlying (unknown) structure in the data. We present an identification result for the latent outcome in terms of the observed data, and demonstrate the superior performance of the proposed strategy in a simulation modeling a residential university environment during the COVID-19 pandemic.
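A toy version of the resource-constrained stochastic intervention (not the authors' Online Super Learner): given model-estimated infection risks, a fixed testing budget is allocated by sampling individuals with probability proportional to risk. The risk model and budget below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n, budget = 1000, 50  # population size and tests available per time-step

# Hypothetical estimated risk of being latently positive, as would be
# produced at each time-step by a working model fit on the observed past.
risk = rng.beta(2, 20, size=n)

# Stochastic intervention under a resource constraint: test-inclusion
# probabilities proportional to risk, scaled so they sum to the budget.
probs = np.minimum(budget * risk / risk.sum(), 1.0)
tested = rng.random(n) < probs

print(f"tests used: {tested.sum()} of {budget} budgeted (in expectation)")
print(f"mean risk tested vs untested: "
      f"{risk[tested].mean():.3f} vs {risk[~tested].mean():.3f}")
```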
The AASM guidelines are the result of decades of effort aimed at standardizing sleep scoring procedures, with the final goal of having a commonly used methodology. The guidelines cover several aspects, from technical/digital specifications (e.g., recommended EEG derivations) to detailed, age-specific sleep scoring rules. In the context of sleep scoring automation, deep learning has demonstrated better performance compared to many other techniques. Usually, clinical expertise and official guidelines are considered fundamental for supporting automated sleep scoring algorithms in solving the task. In this paper, we show that a deep learning based sleep scoring algorithm may not need to fully exploit clinical knowledge or strictly adhere to the AASM guidelines. Specifically, we demonstrate that U-Sleep, a state-of-the-art sleep scoring algorithm, can solve the scoring task even using clinically non-recommended or non-conventional derivations, and without exploiting information about the chronological age of the subjects. We finally strengthen a well-known finding: using data from multiple data centers consistently yields a better performing model than training on a single cohort. Indeed, we show that the latter still holds even when the size and heterogeneity of the single data cohort are increased. In all our experiments we used 28528 polysomnography studies from 13 different clinical studies.
IceCube is a cubic-kilometer array of optical sensors for detecting atmospheric and astrophysical neutrinos between 1 GeV and 1 PeV, deployed 1.45 km to 2.45 km below the surface of the ice sheet at the South Pole. The classification and reconstruction of events from the in-ice detectors play a central role in IceCube data analyses. Reconstructing and classifying events is challenging due to the detector geometry, the inhomogeneous scattering and absorption of light in the ice, and, below 100 GeV, the relatively low number of signal photons produced per event. To address this challenge, IceCube events can be represented as point cloud graphs, with graph neural networks (GNNs) serving as the classification and reconstruction method. The GNN is capable of distinguishing neutrino events from cosmic-ray backgrounds, classifying different neutrino event types, and reconstructing the deposited energy, direction, and interaction vertex. Based on simulation, we provide a comparison in the 1-100 GeV energy range to the current state-of-the-art maximum likelihood techniques used in IceCube analyses, including the effects of known systematic uncertainties. For neutrino event classification, the GNN increases the signal efficiency by 18% at a fixed false positive rate (FPR), compared to the current IceCube method. Alternatively, the GNN decreases the FPR by over a factor of 8 (to below half a percent) at a fixed signal efficiency. For the reconstruction of energy, direction, and interaction vertex, the resolution improves by an average of 13%-20% compared with current maximum likelihood techniques. When running on a GPU, the GNN is capable of processing IceCube events at a rate nearly matching the median IceCube trigger rate of 2.7 kHz, which opens the possibility of using low-energy neutrinos in online searches for transient events.
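The graph representation can be sketched roughly as follows (a generic GNN classifier, not the collaboration's architecture): each sensor hit becomes a node with position/time/charge features, nodes are connected to their nearest spatial neighbors, and graph convolutions plus global pooling yield event-level class scores. The feature choices, neighborhood size, and layer widths are our assumptions.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv, global_mean_pool

def knn_edges(pos, k):
    """Connect each node (sensor hit) to its k nearest spatial neighbors."""
    dist = torch.cdist(pos, pos)
    dist.fill_diagonal_(float("inf"))
    nbrs = dist.topk(k, largest=False).indices           # (N, k)
    src = torch.arange(pos.size(0)).repeat_interleave(k)
    return torch.stack([src, nbrs.reshape(-1)])          # edge_index (2, N*k)

class EventGNN(nn.Module):
    """Graph classifier: node features -> per-event class logits."""
    def __init__(self, in_dim=5, hidden=64, n_classes=3):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x, edge_index, batch):
        h = torch.relu(self.conv1(x, edge_index))
        h = torch.relu(self.conv2(h, edge_index))
        return self.head(global_mean_pool(h, batch))     # one logit row per event

# Hypothetical event: 40 hit sensors with (x, y, z, time, charge) features.
pos = torch.randn(40, 3)
x = torch.cat([pos, torch.rand(40, 2)], dim=1)
edge_index = knn_edges(pos, k=8)
batch = torch.zeros(40, dtype=torch.long)                # all nodes in one event

logits = EventGNN()(x, edge_index, batch)
print(logits.shape)  # (1, 3): e.g. event-type scores for this event
```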